High-order expansion of Neural Ordinary Differential Equations flows

Izzo, Dario, Origer, Sebastien, Acciarini, Giacomo, Biscani, Francesco

arXiv.org Artificial Intelligence

To whom correspondence should be addressed; E-mail: dario.izzo@esa.int. These authors contributed equally to this work.

Neural networks are redefining our use of ordinary differential equations, both by enhancing their ability to model complex phenomena that challenge classical approaches and by enabling their interpretation as infinite-depth neural models. However, their practical applicability remains limited by the lack of explainability of the resulting dynamics, which are governed by an opaque neural network. Beyond numerical simulation, existing analytical approaches are largely restricted to leveraging first-order gradient information due to computational constraints. The versatility of our approach is demonstrated by analyzing neural state-feedback systems, characterizing uncertainties in a data-driven prey-predator control model, and mapping landing trajectories in a three-body neural Hamiltonian ordinary differential equation. In all cases, the method enhances the rigor of the resulting analysis by describing the neural dynamics through explicit mathematical constructs. By leveraging and expanding the differentiable programming toolkit, these results contribute to a deeper understanding of event-triggered neural ordinary differential equations as models of complex systems and provide a valuable tool for explaining their behavior.

Introduction

Ordinary differential equations (ODEs) provide a powerful framework for modeling a wide range of phenomena, from epidemiology and ecology to cosmology, engineering, chemistry, neuroscience, and meteorology. However, in each case, unmodeled and unknown effects can impact the analysis of the resulting dynamics, limiting the accuracy and scope of model-derived conclusions. When modeled or observed data of the system are available, they can be leveraged to partially or fully identify the system, leading to data-driven approaches for discovering or refining ODE models.
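The core idea of a high-order expansion of an ODE flow can be illustrated on a toy, non-neural problem. The sketch below (an illustration under simplifying assumptions, not the paper's implementation) integrates the scalar ODE dx/dt = -x^2 together with its first- and second-order variational equations, then uses the resulting sensitivities to build a second-order Taylor expansion of the flow map around a nominal initial condition. The analytic flow x(t) = x0 / (1 + x0 t) serves as a check.

```python
# Illustrative sketch only: second-order Taylor expansion of an ODE flow map
# with respect to its initial condition, for dx/dt = -x^2.
# The variational equations propagate d(phi)/dx0 and d^2(phi)/dx0^2 alongside x:
#   s1' = f'(x) s1,   s2' = f''(x) s1^2 + f'(x) s2,  with f(x) = -x^2.
import numpy as np

def rhs(y):
    x, s1, s2 = y                      # state, first and second sensitivities
    f, fp, fpp = -x**2, -2.0 * x, -2.0
    return np.array([f, fp * s1, fpp * s1**2 + fp * s2])

def rk4(y, t_end, n=2000):
    """Fixed-step classical Runge-Kutta integration of the augmented system."""
    h = t_end / n
    for _ in range(n):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2)
        k4 = rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

x0, t, d = 1.0, 1.0, 0.05              # nominal initial state, time, perturbation
phi, s1, s2 = rk4(np.array([x0, 1.0, 0.0]), t)

taylor = phi + s1 * d + 0.5 * s2 * d**2      # expanded flow, evaluated at x0 + d
exact = (x0 + d) / (1.0 + (x0 + d) * t)      # analytic flow of dx/dt = -x^2
print(abs(taylor - exact))                   # residual is third order in d
```

Replacing the hand-written right-hand side with a trained neural network (and the hand-derived f', f'' with automatic differentiation) gives the neural-ODE analogue: the expensive integration is done once at the nominal point, after which the polynomial map answers "what if the initial condition shifts by d" without re-integrating.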


Certifying Guidance & Control Networks: Uncertainty Propagation to an Event Manifold

Origer, Sebastien, Izzo, Dario, Acciarini, Giacomo, Biscani, Francesco, Mastroianni, Rita, Bannach, Max, Holt, Harry

arXiv.org Artificial Intelligence

We perform uncertainty propagation on an event manifold for Guidance & Control Networks (G&CNETs), aiming to enhance the certification tools for neural networks in this field. This work utilizes three previously solved optimal control problems with varying levels of dynamics nonlinearity and event manifold complexity. The G&CNETs are trained to represent the optimal control policies of a time-optimal interplanetary transfer, a mass-optimal landing on an asteroid, and energy-optimal drone racing, respectively. For each of these problems, we analytically describe the terminal conditions on an event manifold with respect to initial-state uncertainties. Crucially, this expansion does not depend on time but solely on the initial conditions of the system, thereby making it possible to study the robustness of the G&CNET at any specific stage of a mission defined by the event manifold. Once this analytical expression is found, we provide confidence bounds by applying the Cauchy-Hadamard theorem and perform uncertainty propagation using moment generating functions. While Monte Carlo-based (MC) methods can yield the results we present, this work is driven by the recognition that MC simulations alone may be insufficient for future certification of neural networks in guidance and control applications.
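The moment-based step can be sketched on a minimal example (an illustration, not the paper's code; the coefficients a0, a1, a2 and the Gaussian noise model are assumptions). Once a terminal quantity on the event manifold has been expanded as a polynomial in the initial-state perturbation, say g(d) = a0 + a1 d + a2 d^2 with d ~ N(0, sigma^2), its statistical moments follow in closed form from the Gaussian moments E[d^2] = sigma^2 and E[d^4] = 3 sigma^4 (themselves derivable from the moment generating function E[exp(t d)] = exp(sigma^2 t^2 / 2)), with no sampling required. A Monte Carlo run serves as a cross-check:

```python
# Illustrative sketch only: closed-form uncertainty propagation through a
# second-order polynomial map of a terminal quantity, vs Monte Carlo.
# Coefficients and noise level are made up for the example.
import numpy as np

a0, a1, a2, sigma = 0.5, 0.25, -0.125, 0.02

# Closed-form mean and variance using Gaussian moments:
#   E[g]   = a0 + a2 * sigma^2
#   Var[g] = a1^2 sigma^2 + 2 a2^2 sigma^4
mean_an = a0 + a2 * sigma**2
var_an = a1**2 * sigma**2 + 2 * a2**2 * sigma**4

# Monte Carlo cross-check: sample d, push it through the polynomial map.
rng = np.random.default_rng(0)
d = rng.normal(0.0, sigma, 200_000)
g = a0 + a1 * d + a2 * d**2

print(mean_an, g.mean())   # analytical vs sampled mean
print(var_an, g.var())     # analytical vs sampled variance
```

In the paper's setting the polynomial map comes from the high-order expansion on the event manifold and the moments feed the confidence bounds; the point of the sketch is simply that, given such a map, the propagation step costs a handful of algebraic operations rather than repeated integrations of the closed-loop neural dynamics.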